In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project RUBRIC. Please save regularly.
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
I. Exploratory Data Analysis
II. Rank Based Recommendations
III. User-User Based Collaborative Filtering
IV. Content Based Recommendations (EXTRA - NOT REQUIRED)
V. Matrix Factorization
VI. Extras & Concluding
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
import pandas as pd
import numpy as np
import plotly.express as px
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
import pickle
import project_tests as t  # project test helpers (provides the sol_2_test, sol_5_test, and sol_4_test functions used below)
%matplotlib inline
import os
working_directory=os.getcwd()
print(working_directory)
/Users/lindomavimbela/Downloads
df = pd.read_csv(working_directory +'/user-item-interactions.csv')
df_content = pd.read_csv(working_directory +'/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
df.head()
| | article_id | title | email |
|---|---|---|---|
| 0 | 1430.0 | using pixiedust for fast, flexible, and easier... | ef5f11f77ba020cd36e1105a00ab868bbdbf7fe7 |
| 1 | 1314.0 | healthcare python streaming application demo | 083cbdfa93c8444beaa4c5f5e0f5f9198e4f9e0b |
| 2 | 1429.0 | use deep learning for image classification | b96a4f2e92d8572034b1e9b28f9ac673765cd074 |
| 3 | 1338.0 | ml optimization using cognitive assistant | 06485706b34a5c9bf2a0ecdac41daf7e7654ceb7 |
| 4 | 1276.0 | deploy your python model as a restful api | f01220c46fc92c6e6b161b1849de11faacd7ccb2 |
# Show df_content to get an idea of the data
df_content.head()
| | doc_body | doc_description | doc_full_name | doc_status | article_id |
|---|---|---|---|---|---|
| 0 | Skip navigation Sign in SearchLoading...\r\n\r... | Detect bad readings in real time using Python ... | Detect Malfunctioning IoT Sensors with Streami... | Live | 0 |
| 1 | No Free Hunch Navigation * kaggle.com\r\n\r\n ... | See the forest, see the trees. Here lies the c... | Communicating data science: A guide to present... | Live | 1 |
| 2 | ☰ * Login\r\n * Sign Up\r\n\r\n * Learning Pat... | Here’s this week’s news in Data Science and Bi... | This Week in Data Science (April 18, 2017) | Live | 2 |
| 3 | DATALAYER: HIGH THROUGHPUT, LOW LATENCY AT SCA... | Learn how distributed DBs solve the problem of... | DataLayer Conference: Boost the performance of... | Live | 3 |
| 4 | Skip navigation Sign in SearchLoading...\r\n\r... | This video demonstrates the power of IBM DataS... | Analyze NY Restaurant data using Spark in DSX | Live | 4 |
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
1. What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
df.isnull().sum()
article_id     0
title          0
email         17
dtype: int64
df_content.isnull().sum()
doc_body           14
doc_description     3
doc_full_name       0
doc_status          0
article_id          0
dtype: int64
# number of articles published
len(set(df.article_id))
714
# Count user interaction
user_engagement = df.email.value_counts(dropna=False)
user_engagement
2b6c0f514c2f2b04ad3c4583407dccd0810469ee 364
77959baaa9895a7e2bdc9297f8b27c1b6f2cb52a 363
2f5c7feae533ce046f2cb16fb3a29fe00528ed66 170
a37adec71b667b297ed2440a9ff7dad427c7ac85 169
8510a5010a5d4c89f5b07baac6de80cd12cfaf93 160
...
f5035acf16af3e79700393838fa1023ad38da668 1
81335c2e5917100a5cbdcc2bc0285fed6d685f6d 1
98d4864a24bc8f9915c8c8b5ebd3aa1eaa71cbaf 1
c87e297a1a99ae042be2015ff9056cf13195eefd 1
1f18e8aaccd6c8720180c3fe264c8aef5b00697f 1
Name: email, Length: 5149, dtype: int64
fig = px.histogram(user_engagement, x="email", nbins=150)
fig.show()
df_summary = df.groupby('email')['article_id'].count()
df_summary.describe()
count    5148.000000
mean        8.930847
std        16.802267
min         1.000000
25%         1.000000
50%         3.000000
75%         9.000000
max       364.000000
Name: article_id, dtype: float64
df_summary.median()
3.0
median_val = 3 # 50% of individuals interact with 3 articles or fewer.
max_views_by_user = 364 # The maximum number of user-article interactions by any one user is 364.
2. Explore and remove duplicate articles from the df_content dataframe.
# Find and explore duplicate articles
df_content.duplicated(subset='article_id',keep='first').sum()
5
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset=['article_id'], inplace=True)
3. Use the cells below to find:
a. The number of unique articles that have an interaction with a user.
b. The number of unique articles in the dataset (whether they have any interactions or not).
c. The number of unique users in the dataset. (excluding null values)
d. The number of user-article interactions in the dataset.
most_articles = df.article_id.value_counts(dropna=False)
most_articles
1429.0 937
1330.0 927
1431.0 671
1427.0 643
1364.0 627
...
1344.0 1
984.0 1
1113.0 1
675.0 1
662.0 1
Name: article_id, Length: 714, dtype: int64
# number of unique articles
len(set(df.article_id))
714
# The number of unique articles on the IBM platform
total_articles = df_content.shape[0]
total_articles
1051
df.nunique()
article_id     714
title          714
email         5148
dtype: int64
df.shape[0]
45993
unique_articles = 714 # The number of unique articles that have at least one interaction
total_articles = 1051 # The number of unique articles on the IBM platform
unique_users = 5148 # The number of unique users
user_article_interactions = 45993 # The number of user-article interactions
4. Use the cells below to find the most viewed article_id, as well as how often it was viewed. After talking to the company leaders, the email_mapper function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
df.groupby(by='article_id').count().sort_values(by='email', ascending=False).head()
| article_id | title | email |
|---|---|---|
| 1429.0 | 937 | 937 |
| 1330.0 | 927 | 927 |
| 1431.0 | 671 | 671 |
| 1427.0 | 643 | 643 |
| 1364.0 | 627 | 627 |
most_viewed_article_id = '1429.0' # The most viewed article in the dataset as a string with one value following the decimal
max_views = 937 # The most viewed article in the dataset was viewed how many times?
# Map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
| article_id | title | user_id | |
|---|---|---|---|
| 0 | 1430.0 | using pixiedust for fast, flexible, and easier... | 1 |
| 1 | 1314.0 | healthcare python streaming application demo | 2 |
| 2 | 1429.0 | use deep learning for image classification | 3 |
| 3 | 1338.0 | ml optimization using cognitive assistant | 4 |
| 4 | 1276.0 | deploy your python model as a restful api | 5 |
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
1. Fill in the function below to return the n top articles ordered with most interactions as the top. Test your function using the tests below.
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
# list of top articles
top_articles = list(df.groupby(by='title').count().sort_values(by='user_id', ascending=False).head(n).index)
return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article ids
'''
top_articles = list(df.groupby(by='article_id').count().sort_values(by='user_id', ascending=False).head(n).index)
return top_articles # Return the top article ids
print(get_top_articles(5))
print(get_top_article_ids(5))
['use deep learning for image classification', 'insights from new york car accident reports', 'visualize car data with brunel', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'predicting churn with the spss random tree algorithm']
[1429.0, 1330.0, 1431.0, 1427.0, 1364.0]
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
1. Use the function below to reformat the df dataframe to be shaped with users as the rows and articles as the columns.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
# create the user-article matrix with 1's and 0's
user_item = df.groupby(by=['user_id', 'article_id']).agg(lambda x: 1).unstack().fillna(0)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
user_item_matrix=create_user_item_matrix(df)
user_item_matrix.to_pickle("user_item_matrix.pkl")
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
You have passed our quick tests! Please proceed!
2. Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered list of the most similar users (excluding the provided user_id)
'''
# compute similarity of each user to the provided user
similar_user_matrix = user_item.dot(np.transpose(user_item))
similarity_for_user_id = similar_user_matrix.loc[user_id, :]
# sort by similarity
similarity_for_user_id = similarity_for_user_id.sort_values(ascending=False)
# create list of just the ids
similar_users = similarity_for_user_id.index.tolist()
# remove the own user's id
similar_users.remove(user_id)
return similar_users # return a list of the users in order from most to least similar
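As a side note, the full user-user matrix computed above is 5149 x 5149. A minimal sketch of an equivalent, cheaper computation for a single user follows (the function name is illustrative, not part of the project):

# Dot the user-item matrix with one user's row instead of computing all pairs.
def find_similar_users_fast(user_id, user_item=user_item):
    sims = user_item.dot(user_item.loc[user_id])  # similarity of every user to user_id
    similar_users = sims.sort_values(ascending=False).index.tolist()
    similar_users.remove(user_id)  # drop the user's own id
    return similar_users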
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 3870, 131, 4201, 46, 5041]
The 5 most similar users to user 3933 are: [1, 23, 3782, 203, 4459]
The 3 most similar users to user 46 are: [4201, 3782, 23]
3. Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
article_names = [df[df['article_id']==float(id)]['title'].values[0] for id in article_ids]
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column in df)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# article ids where the user's row in the matrix equals 1
article_ids = [str(id) for id in list(user_item.loc[user_id][user_item.loc[user_id]==1].title.index)]
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# Get user articles
article_ids, _ = get_user_articles(user_id)
# Find similar users
most_similar_users = find_similar_users(user_id)
recs = []
for user in most_similar_users:
# find the list of articles that are not seen by the user
ids, _ = get_user_articles(user)
article_not_seen = np.setdiff1d(np.array(ids), np.array(article_ids))
# Update recs with new recs
article_not_seen_recs = np.setdiff1d(article_not_seen, np.array(recs))
recs.extend(list(article_not_seen_recs))
# exit loop once we have at least m recommendations
if len(recs) >= m:
break
recs = recs[:m]
return recs # return your recommendations for this user_id
get_article_names(user_user_recs(2,10))
['mapping points with folium', 'a comparison of logistic regression and naive bayes ', 'access mysql with python', 'tidy up your jupyter notebooks with scripts', 'analyze accident reports on amazon emr spark', 'analyze energy consumption in buildings', 'analyze open data sets with spark & pixiedust', 'analyze open data sets with pandas dataframes', 'analyze precipitation data', 'analyzing data by using the sparkling.data library features']
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
If this is all you see, you passed all of our tests! Nice job!
4. Now we are going to improve the consistency of the user_user_recs function from above. When users are tied in closeness to the input user, choose the users with more total article interactions first; when the candidate articles would exceed m, choose the articles with more total interactions first.
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
num_interactions - the number of articles viewed by the user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
# get similar users to the user sorted from most to least
get_similar_users = find_similar_users(user_id)
# compute similarity using the dot-product between given user and all others
similarity = np.dot(user_item.loc[user_id].values, user_item.loc[get_similar_users].values.T)
# count interactions
user_interaction_count = user_item.apply(np.count_nonzero, axis=1)
# DataFrame
neighbors_df = pd.DataFrame({
'neighbor_id': get_similar_users,
'similarity': similarity
})
# add interactions column
neighbors_df['num_interactions'] = neighbors_df['neighbor_id'].map(user_interaction_count)
# sort by interactions
neighbors_df = neighbors_df.sort_values(by=['similarity', 'num_interactions'], ascending=False)
return neighbors_df
get_top_sorted_users(10).head()
| | neighbor_id | similarity | num_interactions |
|---|---|---|---|
| 0 | 3354 | 17.0 | 17 |
| 1 | 3697 | 15.0 | 100 |
| 2 | 49 | 15.0 | 101 |
| 3 | 3764 | 14.0 | 97 |
| 4 | 98 | 14.0 | 97 |
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose articles with the most total interactions
before choosing those with fewer total interactions.
'''
# get user articles
article_ids, _ = get_user_articles(user_id)
# find similar users
most_similar_users = list(get_top_sorted_users(user_id).neighbor_id)
recs = []
for user in most_similar_users:
#find articles not seen by the user
ids, _ = get_user_articles(user)
article_not_seen = np.setdiff1d(np.array(ids), np.array(article_ids))
# update recs with new recommendations
article_not_seen_recs = np.setdiff1d(article_not_seen, np.array(recs))
recs.extend(list(article_not_seen_recs))
# exit loop once we have at least m recommendations
if len(recs) >= m:
break
recs = recs[:m]
rec_names = get_article_names(recs)
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
The top 10 recommendations for user 20 are the following article ids:
['1052.0', '1059.0', '1161.0', '1162.0', '1163.0', '1164.0', '1169.0', '1172.0', '1173.0', '1175.0']

The top 10 recommendations for user 20 are the following article names:
['access db2 warehouse on cloud and db2 with python', 'airbnb data for analytics: amsterdam calendar', 'analyze data, build a dashboard with spark and pixiedust', 'analyze energy consumption in buildings', 'analyze open data sets with spark & pixiedust', 'analyze open data sets with pandas dataframes', 'annual precipitation by country 1990-2009', 'apache spark lab, part 3: machine learning', 'births attended by skilled health staff (% of total) by country', 'breast cancer detection with xgboost, wml and scikit']
5. Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each question following the comments below.
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).neighbor_id.values[0]# Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).neighbor_id.values[9]# Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
6. If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
For a brand-new user with no interaction history, only the rank-based functions (get_top_articles / get_top_article_ids) could be used, since collaborative filtering requires prior interactions. A better approach would be knowledge-based recommendations: ask new users to choose the types of content they are interested in from a set of defined options, then recommend articles that match those choices.
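As a rough illustration, here is a minimal sketch of such a knowledge-based recommender, assuming we simply match user-chosen keywords against article titles in df (the function name and keyword list are hypothetical, not part of the project):

# Hypothetical sketch of a knowledge-based recommender for new users:
# match the user's chosen interests against article titles, then rank by popularity.
def knowledge_based_recs(keywords, n=10, df=df):
    pattern = '|'.join(keywords)
    # keep only interactions whose article title mentions a chosen keyword
    matches = df[df['title'].str.contains(pattern, case=False, na=False)]
    # rank the matching articles by how often they were interacted with
    return list(matches['title'].value_counts().head(n).index)

# Example: a new user who says they are interested in deep learning or Spark
knowledge_based_recs(['deep learning', 'spark'])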
7. Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to them
new_user_recs = [str(x) for x in get_top_article_ids(10)]# Your recommendations here
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
That's right! Nice job!
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
1. You should have already created a user_item matrix in question 1 of Part III. This first question here will just require that you run the cells to get things set up for the rest of Part V of the notebook.
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.pkl')
# quick look at the matrix
user_item_matrix.head()
| title | |||||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| article_id | 0.0 | 2.0 | 4.0 | 8.0 | 9.0 | 12.0 | 14.0 | 15.0 | 16.0 | 18.0 | ... | 1434.0 | 1435.0 | 1436.0 | 1437.0 | 1439.0 | 1440.0 | 1441.0 | 1442.0 | 1443.0 | 1444.0 |
| user_id | |||||||||||||||||||||
| 1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ... | 0.0 | 0.0 | 1.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | ... | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5 rows × 714 columns
2. In this situation, you can use Singular Value Decomposition from numpy on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)# use the built in to get the three matrices
u.shape
(5149, 5149)
s.shape
(714,)
vt.shape
(714, 714)
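This differs from the lesson: there, the user-item matrix contained missing values, so FunkSVD was needed; here every entry is a 0 or a 1 with no NaNs, so numpy's classical SVD can be applied directly. As a quick sanity check (a sketch, not part of the required work), keeping all 714 latent features should reconstruct the matrix exactly, up to floating-point error:

# With all 714 singular values kept, the decomposition reproduces the original
# matrix (only the first 714 columns of u matter, since u is 5149 x 5149 and
# the remaining components are multiplied by zero).
reconstructed = np.dot(np.dot(u[:, :len(s)], np.diag(s)), vt)
assert np.allclose(reconstructed, user_item_matrix.values)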
3. Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/(user_item_matrix.shape[0]*user_item_matrix.shape[1]));  # accuracy = 1 - wrong entries / total entries in the matrix
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
4. From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
#user-item matrix of the training dataframe
user_item_train = create_user_item_matrix(df_train)
#user-item matrix of the testing dataframe
user_item_test = create_user_item_matrix(df_test)
#all of the test user ids
test_idx = list(user_item_test.index.values)
#all of the test article ids
test_arts = user_item_test.title.columns.values
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
user_item_train.shape
(4487, 714)
len(np.setdiff1d(user_item_test.index, user_item_train.index))
662
common_idx = user_item_train.index.isin(test_idx)
common_idx.sum()
20
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?':c, # letter here,
'How many users in the test set are we not able to make predictions for because of the cold start problem?':a, # letter here,
'How many articles can we make predictions for in the test set?':b, # letter here,
'How many articles in the test set are we not able to make predictions for because of the cold start problem?':d # letter here
}
t.sol_4_test(sol_4_dict)
5. Now use the user_item_train dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the user_item_test dataset that you can predict using this matrix decomposition with different numbers of latent features, to see how many features make sense to keep based on the accuracy on the test data. This will require combining what was done in questions 2 - 4.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train)
u_train.shape, s_train.shape, vt_train.shape
((4487, 4487), (714,), (714, 714))
# Subset of rows in the user_item_test dataset that you can predict
# Rows that match the test set
test_idx = user_item_test.index
row_idxs = user_item_train.index.isin(test_idx)
# Columns that match the test set
test_col = user_item_test.columns
col_idxs = user_item_train.columns.isin(test_col)
# Test data
train_idx = user_item_train.index
row_idxs_2 = user_item_test.index.isin(train_idx)
sub_user_item_test = user_item_test.loc[row_idxs_2]
u_test = u_train[row_idxs, :]
vt_test = vt_train[:, col_idxs]
latent_feats = np.arange(10, 700+10, 20)
accs_train, accs_test = [], []
for k in latent_feats:
# restructure with k latent features
s_train_lat, u_train_lat, vt_train_lat = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
u_test_lat, vt_test_lat = u_test[:, :k], vt_test[:k, :]
# take dot product
user_item_train_preds = np.around(np.dot(np.dot(u_train_lat, s_train_lat), vt_train_lat))
user_item_test_preds = np.around(np.dot(np.dot(u_test_lat, s_train_lat), vt_test_lat))
# compute prediction accuracy
accs_train.append(accuracy_score(user_item_train.values.flatten(), user_item_train_preds.flatten()))
accs_test.append(accuracy_score(sub_user_item_test.values.flatten(), user_item_test_preds.flatten()))
plt.figure()
plt.plot(latent_feats, accs_train, label='Train')
plt.plot(latent_feats, accs_test, label='Test')
plt.xlabel('Number of Latent Features')
plt.ylabel('Accuracy')
plt.title('Accuracy vs. Number of Latent Features')
plt.legend()
plt.show()
6. Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
From the results, we can see that increasing the number of latent features raises prediction accuracy on the training set but lowers accuracy on the test set, which is a sign of overfitting.

Collaborative filtering or content-based recommendations could be used to improve the predictions, since the train and test sets share too few common users for SVD to be evaluated reliably (only 20 of the more than 4,000 users in the train set also appear in the test set).

A/B testing or user-group analysis could be used to get real feedback on the performance of the recommendation engine: for example, expose one group of users to the new recommendations and compare their engagement against a control group that keeps the current way of finding articles.
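As a rough sketch of how such an A/B test might be evaluated, assuming scipy is available and using hypothetical per-user interaction counts for a control group and an experiment group (the numbers below are illustrative, not from this dataset):

from scipy import stats

# Hypothetical per-user article-interaction counts collected during the test
control = np.array([3, 1, 4, 2, 5, 3, 2, 1, 4, 3])     # current article-discovery experience
experiment = np.array([5, 2, 6, 4, 7, 3, 5, 2, 6, 4])  # users shown the new recommendations

# Welch's two-sample t-test on mean interactions per user
t_stat, p_value = stats.ttest_ind(experiment, control, equal_var=False)
print("mean lift: {:.2f} interactions/user".format(experiment.mean() - control.mean()))
print("p-value: {:.3f}".format(p_value))  # a small p-value suggests the lift is real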